Musculoskeletal


Best Home Gym Setup (2026): Adjustable Weights, Resistance Bands, and More

WIRED

Lifting weights can keep you carrying groceries and riding bikes even as you get older. To join or not to join a gym: That is the question. If you opt out of building a home gym, you can join a club and have access to more weights and machines. Friends and classes motivate you to keep coming, and that monthly bill keeps you disciplined. On the other hand, gym memberships are steep, workouts can get hijacked by bullies, and going to the gym is an additional commute.


YouTubePD: A Multimodal Benchmark for Parkinson's Disease Analysis Supplementary Material

Neural Information Processing Systems

We include all our annotations and extracted landmarks. This ensures that we uphold the highest standards of ethical data usage. In Table A1, we summarize the severity label distribution in YouTubePD. We also summarize the demographic distribution in YouTubePD, split between PD-positive and healthy control (HC), or PD-negative, subjects. This decision is based on the clinician's suggestion, since an accurate UPDRS facial expression rating would require more information. This strategy also allows for a finer classification.



Why do your joints hurt when it's cold? We asked a doctor.

Popular Science

Why do your joints hurt when it's cold? And what you can do to ease the aches. Winter can amplify aches and pains through pressure shifts, reduced movement, and muscle tightening. Each winter, over a million "snowbirds" descend on places like Florida and Arizona to avoid the season's freezing temperatures and instead ride it out in warmth.


Predicting Parkinson's Disease Progression Using Statistical and Neural Mixed Effects Models: Comparative Study on Longitudinal Biomarkers

Tong, Ran, Wang, Lanruo, Wang, Tong, Yan, Wei

arXiv.org Machine Learning

Predicting Parkinson's Disease (PD) progression is crucial, and voice biomarkers offer a non-invasive method for tracking symptom severity (UPDRS scores) through telemonitoring. Analyzing this longitudinal data is challenging due to within-subject correlations and complex, nonlinear patient-specific progression patterns. This study benchmarks linear mixed models (LMMs) against two advanced hybrid approaches: the Generalized Neural Network Mixed Model (GNMM) (Mandel 2021), which embeds a neural network within a GLMM structure, and the Neural Mixed Effects (NME) model (Wortwein 2023), which allows nonlinear subject-specific parameters throughout the network. Using the Oxford Parkinson's telemonitoring voice dataset, we evaluate these models' performance in predicting Total UPDRS to offer practical guidance for PD research and clinical applications.
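
The within-subject correlation problem the abstract highlights can be illustrated with a toy example. The sketch below uses synthetic data and a per-subject-intercept fit (a fixed-effects stand-in for the random-intercept LMM, not the paper's actual models): when a subject's voice baseline correlates with their UPDRS baseline, pooled OLS inflates the slope, while subject intercepts recover the true within-subject effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic longitudinal data (illustrative, not the Oxford dataset):
# 20 subjects, 15 telemonitoring visits each.
n_subj, n_vis = 20, 15
subj = np.repeat(np.arange(n_subj), n_vis)

# Subject-level voice baseline correlates with subject-level UPDRS
# baseline -- exactly the confounding that pooled OLS cannot untangle.
mx = rng.normal(size=n_subj)                      # subject voice baseline
b = 20.0 + 4.0 * mx + rng.normal(size=n_subj)     # subject UPDRS intercept
x = mx[subj] + rng.normal(size=n_subj * n_vis)    # visit-level voice feature
y = b[subj] + 0.5 * x + rng.normal(size=x.size)   # true within-subject slope: 0.5

# Pooled OLS ignores within-subject correlation and inflates the slope.
X_pooled = np.column_stack([np.ones_like(x), x])
beta_pooled, *_ = np.linalg.lstsq(X_pooled, y, rcond=None)

# One intercept per subject absorbs the subject baselines, so the
# remaining slope reflects only within-subject progression.
dummies = (subj[:, None] == np.arange(n_subj)).astype(float)
X_fe = np.column_stack([dummies, x])
beta_fe, *_ = np.linalg.lstsq(X_fe, y, rcond=None)

print(f"pooled slope:         {beta_pooled[1]:.3f}")   # inflated
print(f"within-subject slope: {beta_fe[-1]:.3f}")      # near 0.5
```

A full LMM additionally shrinks the per-subject intercepts toward a shared mean; the GNMM and NME models cited in the abstract extend this idea into nonlinear, network-parameterized form.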


RAD: Towards Trustworthy Retrieval-Augmented Multi-modal Clinical Diagnosis

Li, Haolin, Dai, Tianjie, Chen, Zhe, Du, Siyuan, Yao, Jiangchao, Zhang, Ya, Wang, Yanfeng

arXiv.org Artificial Intelligence

Clinical diagnosis is a highly specialized discipline requiring both domain expertise and strict adherence to rigorous guidelines. While current AI-driven medical research predominantly focuses on knowledge graphs or natural-text pretraining paradigms to incorporate medical knowledge, these approaches primarily rely on implicitly encoded knowledge within model parameters, neglecting the task-specific knowledge required by diverse downstream tasks. To address this limitation, we propose Retrieval-Augmented Diagnosis (RAD), a novel framework that explicitly injects external knowledge into multimodal models directly on downstream tasks. Specifically, RAD operates through three key mechanisms: retrieval and refinement of disease-centered knowledge from multiple medical sources, a guideline-enhanced contrastive loss that constrains the latent distance between multi-modal features and guideline knowledge, and a dual transformer decoder that employs guidelines as queries to steer cross-modal fusion, aligning the models with clinical diagnostic workflows from guideline acquisition to feature extraction and decision-making. Moreover, recognizing the lack of quantitative evaluation of interpretability for multimodal diagnostic models, we introduce a set of criteria to assess interpretability from both image and text perspectives. Extensive evaluations across four datasets with different anatomies demonstrate RAD's generalizability, achieving state-of-the-art performance. Furthermore, RAD enables the model to concentrate more precisely on abnormal regions and critical indicators, ensuring evidence-based, trustworthy diagnosis. Our code is available at https://github.com/tdlhl/RAD.
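
The abstract's "guideline-enhanced contrastive loss" could plausibly take an InfoNCE-style form: each sample's fused multimodal feature is pulled toward the embedding of its matching guideline and pushed from the others. The sketch below is a minimal NumPy illustration of that idea; `guideline_contrastive_loss`, the pairing, and the temperature are assumptions, not the paper's published formulation (see the RAD repository for the real one).

```python
import numpy as np

def guideline_contrastive_loss(feats, guides, tau=0.07):
    """InfoNCE-style loss: the i-th multimodal feature should be
    closest (in cosine similarity) to the i-th guideline embedding."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    g = guides / np.linalg.norm(guides, axis=1, keepdims=True)
    logits = f @ g.T / tau                         # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # pull matched pairs together

rng = np.random.default_rng(1)
g = rng.normal(size=(8, 32))                       # 8 disease-guideline embeddings
aligned = g + 0.05 * rng.normal(size=g.shape)      # features near their guideline
random_f = rng.normal(size=g.shape)                # unaligned features

print(guideline_contrastive_loss(aligned, g))      # low loss
print(guideline_contrastive_loss(random_f, g))     # higher loss
```

Minimizing such a loss constrains the latent distance between modality-fused features and guideline knowledge, which is the stated goal of RAD's second mechanism.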


Motion2Meaning: A Clinician-Centered Framework for Contestable LLM in Parkinson's Disease Gait Interpretation

Nguyen, Loc Phuc Truong, Do, Hung Thanh, Nguyen, Hung Truong Thanh, Cao, Hung

arXiv.org Artificial Intelligence

AI-assisted gait analysis holds promise for improving Parkinson's Disease (PD) care, but current clinical dashboards lack transparency and offer no meaningful way for clinicians to interrogate or contest AI decisions. To address this issue, we present Motion2Meaning, a clinician-centered framework that advances Contestable AI through a tightly integrated interface designed for interpretability, oversight, and procedural recourse. Our approach leverages vertical Ground Reaction Force (vGRF) time-series data from wearable sensors as an objective biomarker of PD motor states. The system comprises three key components: a Gait Data Visualization Interface (GDVI), a one-dimensional Convolutional Neural Network (1D-CNN) that predicts Hoehn & Yahr severity stages, and a Contestable Interpretation Interface (CII) that combines our novel Cross-Modal Explanation Discrepancy (XMED) safeguard with a contestable Large Language Model (LLM). Our 1D-CNN achieves 89.0% F1-score on the public PhysioNet gait dataset. XMED successfully identifies model unreliability by detecting a five-fold increase in explanation discrepancies in incorrect predictions (7.45%) compared to correct ones (1.56%), while our LLM-powered interface enables clinicians to validate correct predictions and successfully contest a portion of the model's errors. A human-centered evaluation of this contestable interface reveals a crucial trade-off between the LLM's factual grounding and its readability and responsiveness to clinical feedback. This work demonstrates the feasibility of combining wearable sensor analysis with Explainable AI (XAI) and contestable LLMs to create a transparent, auditable system for PD gait interpretation that maintains clinical oversight while leveraging advanced AI capabilities. Our implementation is publicly available at: https://github.com/hungdothanh/motion2meaning.
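
The XMED safeguard flags unreliable predictions by measuring disagreement between explanations. One simple way to realize such a score, sketched below on synthetic saliency maps, is the fraction of top-k salient time steps that two explanations do not share; `xmed_discrepancy` and its top-k formulation are hypothetical stand-ins for the paper's metric, not its actual definition.

```python
import numpy as np

def xmed_discrepancy(attr_a, attr_b, k=5):
    """Hypothetical XMED-style score: fraction of the top-k salient
    time steps that the two explanations do NOT have in common."""
    top_a = set(np.argsort(attr_a)[-k:])
    top_b = set(np.argsort(attr_b)[-k:])
    return 1.0 - len(top_a & top_b) / k

steps = 100
sal = np.zeros(steps)
sal[40:45] = 1.0                                # salient gait segment
agree = sal + 0.01 * np.arange(steps) / steps   # nearly identical explanation
conflict = np.roll(sal, 30)                     # explanation points elsewhere

print(xmed_discrepancy(sal, agree))     # 0.0: explanations agree
print(xmed_discrepancy(sal, conflict))  # 1.0: explanations disagree
```

Under such a metric, the paper's reported five-fold gap (7.45% discrepancy on incorrect predictions vs. 1.56% on correct ones) is what lets the interface route unreliable cases to clinician review.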


Stephen Hawking's computer gets a glow up: AI-powered AVATAR creates new possibilities for people with severe disabilities

Daily Mail - Science & tech



Scientists Thought Parkinson's Was in Our Genes. It Might Be in the Water

WIRED

Scientists Thought Parkinson's Was in Our Genes. New ideas about chronic illness could revolutionize treatment, if we take the research seriously. Amy Lindberg spent 26 years in the Navy and she still walked like it: with intention, like her chin had someplace to be. But around 2017, her right foot stopped following orders. Lindberg and her husband Brad were five years into their retirement. After moving 10 times for Uncle Sam, they'd bought their dream house near the North Carolina coast. They had a backyard that spilled out onto wetlands. From the kitchen, you could see cranes hunting. They kept bees and played pickleball and watched their children grow. But now Lindberg's right foot was out of rhythm. She worked hard to ignore it, but she couldn't disregard the tremors.


A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm

Baghernezhad, Soroush, Mohammadreza, Elaheh, da Fonseca, Vinicius Prado, Zou, Ting, Jiang, Xianta

arXiv.org Artificial Intelligence

Gestures are an integral part of our daily interactions with the environment. Hand gesture recognition (HGR) is the process of interpreting human intent through various input modalities, such as visual data (images and videos) and bio-signals. Bio-signals are widely used in HGR due to their ability to be captured non-invasively via sensors placed on the arm. Among these, surface electromyography (sEMG), which measures the electrical activity of muscles, is the most extensively studied modality. However, less-explored alternatives such as inertial measurement units (IMUs) can provide complementary information on subtle muscle movements, which makes them valuable for gesture recognition. In this study, we investigate the potential of using IMU signals from different muscle groups to capture user intent. Our results demonstrate that IMU signals contain sufficient information to serve as the sole input for static gesture recognition. Moreover, we compare different muscle groups and assess the quality of pattern recognition for each group individually. We further found that tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition. We believe that leveraging muscle micro-movement information can enhance the usability of prosthetic arms for amputees. This approach also offers new possibilities for hand gesture recognition in fields such as robotics, teleoperation, sign language interpretation, and beyond.
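
The claim that IMU signals alone suffice for static gesture recognition can be illustrated with a toy pipeline. The sketch below uses synthetic 6-axis IMU summary features and a nearest-centroid classifier; the feature layout and classifier are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic IMU data: 3 static gestures, each with a distinct mean
# 6-axis (accelerometer + gyroscope) feature pattern plus sensor noise.
n_gestures, n_axes = 3, 6
centers = rng.normal(size=(n_gestures, n_axes))
X_train = np.vstack([c + 0.1 * rng.normal(size=(30, n_axes)) for c in centers])
y_train = np.repeat(np.arange(n_gestures), 30)

# Nearest-centroid classifier: a minimal stand-in for the pattern
# recognition the study compares across muscle groups.
centroids = np.array([X_train[y_train == k].mean(axis=0)
                      for k in range(n_gestures)])

def predict(x):
    """Assign x to the gesture whose training centroid is nearest."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

X_test = np.vstack([c + 0.1 * rng.normal(size=(10, n_axes)) for c in centers])
y_test = np.repeat(np.arange(n_gestures), 10)
acc = np.mean(np.array([predict(x) for x in X_test]) == y_test)
print(f"static-gesture accuracy: {acc:.2f}")
```

Because static gestures produce stable, well-separated feature patterns, even this minimal classifier performs well on clean data; the study's harder question, which muscle groups and micro-movements carry that separability in real IMU recordings, is what the full paper addresses.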